On Globally Linear Convergence of Dual Gradient Descent Method for Sparse Solutions

Authors

  • Pinghua Gong
  • Ming-Jun Lai
  • Jieping Ye
Abstract

In [14], researchers studied the convergence of a dual gradient descent algorithm for sparse solutions of underdetermined linear systems and showed that it converges globally at a linear rate. In this paper we present another analysis. Mainly, we remove one assumption, complete full-rankness of the linear system, from the convergence result in [14], and we give a different argument that significantly simplifies the proof of linear convergence in [14].
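The abstract does not spell out the iteration, but a common formulation in this literature applies dual gradient descent to the augmented problem min ||x||_1 + ||x||^2/(2*alpha) subject to Ax = b, whose dual gradient has a closed form through soft-thresholding. The sketch below assumes that formulation; the values of alpha, the step size, and the iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def shrink(v, mu):
    """Soft-thresholding, the proximal map of mu * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def dual_gradient_descent(A, b, alpha=5.0, step=None, iters=2000):
    """Gradient ascent on the dual of
        min ||x||_1 + ||x||^2 / (2*alpha)   s.t.  A x = b.
    The primal iterate x(y) = alpha * shrink(A^T y, 1) is recovered
    in closed form from the dual variable y."""
    m, n = A.shape
    if step is None:
        # 1/L with L = alpha * ||A||_2^2, the Lipschitz constant of the dual gradient
        step = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)
    y = np.zeros(m)
    for _ in range(iters):
        x = alpha * shrink(A.T @ y, 1.0)   # closed-form primal iterate
        y = y + step * (b - A @ x)          # dual gradient step
    return alpha * shrink(A.T @ y, 1.0)

# usage: recover a sparse x0 from an underdetermined system
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x0 = np.zeros(200); x0[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
b = A @ x0
x = dual_gradient_descent(A, b)
print(np.linalg.norm(x - x0))
```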

Similar articles

Randomized Sparse Block Kaczmarz as Randomized Dual Block-Coordinate Descent

We show that the Sparse Kaczmarz method is a particular instance of the coordinate gradient method applied to an unconstrained dual problem corresponding to a regularized ℓ1-minimization problem subject to linear constraints. Based on this observation and recent theoretical work concerning the convergence analysis and corresponding convergence rates for the randomized block coordinate gradient ...
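As a concrete illustration of the method being discussed, here is a minimal sketch of the randomized sparse Kaczmarz iteration: row i is sampled with probability proportional to ||a_i||^2, the auxiliary variable z takes a Kaczmarz projection step, and the primal iterate is its soft-thresholding. The regularization parameter lam and the iteration count are illustrative.

```python
import numpy as np

def shrink(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def randomized_sparse_kaczmarz(A, b, lam=1.0, iters=20000, seed=0):
    """Randomized sparse Kaczmarz: a Kaczmarz step on z for a random row,
    followed by soft-thresholding to obtain the sparse iterate x."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A * A, axis=1)
    probs = row_norms / row_norms.sum()
    z = np.zeros(n)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        z -= (A[i] @ x - b[i]) / row_norms[i] * A[i]   # projection step on z
        x = shrink(z, lam)                              # sparse primal iterate
    return x
```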


Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all o...
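Of the methods listed, stochastic gradient descent with heavy-ball momentum is the simplest instance: x_{k+1} = x_k - step * g_k + beta * (x_k - x_{k-1}), where g_k is a stochastic gradient. The sketch below uses illustrative parameter values and a row-sampled least-squares example that are not from the paper.

```python
import numpy as np

def sgd_heavy_ball(grad, x0, step=0.1, beta=0.9, iters=1000, seed=0):
    """SGD with heavy-ball momentum:
        x_{k+1} = x_k - step * g_k + beta * (x_k - x_{k-1}),
    where g_k is a stochastic gradient supplied by grad(x, rng)."""
    rng = np.random.default_rng(seed)
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        g = grad(x, rng)
        x, x_prev = x - step * g + beta * (x - x_prev), x
    return x

# usage: least squares with single-row stochastic gradients
rng0 = np.random.default_rng(1)
A = rng0.standard_normal((100, 10)); b = A @ rng0.standard_normal(10)
def grad(x, rng):
    i = rng.integers(len(b))
    return (A[i] @ x - b[i]) * A[i]
x = sgd_heavy_ball(grad, np.zeros(10), step=0.01)
```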


A Voted Regularized Dual Averaging Method for Large-Scale Discriminative Training in Natural Language Processing

We propose a new algorithm based on the dual averaging method for large-scale discriminative training in natural language processing (NLP), as an alternative to perceptron algorithms or stochastic gradient descent (SGD). The new algorithm estimates parameters of linear models by minimizing L1-regularized objectives and is effective in obtaining sparse solutions, which is particularly desir...
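The voting scheme is the paper's own contribution and is not reproduced here; the sketch below shows only the underlying L1-regularized dual averaging update (in the style of Xiao's RDA), in which a running average of gradients drives a closed-form soft-thresholded iterate. The parameters lam and gamma are illustrative assumptions.

```python
import numpy as np

def l1_rda(grad, dim, lam=0.01, gamma=1.0, iters=1000, seed=0):
    """L1-regularized dual averaging: the average gradient g_bar yields
    the closed-form update
        w_{t+1} = -(sqrt(t)/gamma) * shrink(g_bar, lam)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    g_bar = np.zeros(dim)
    for t in range(1, iters + 1):
        g = grad(w, rng)
        g_bar += (g - g_bar) / t   # running average of all gradients so far
        w = -(np.sqrt(t) / gamma) * np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
    return w
```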


Indexed Learning for Large-Scale Linear Classification

Linear classification has achieved complexity linear in the data size. However, in many applications, large-scale data contains only a few samples that can improve the target objective. In this paper, we propose a sublinear-time algorithm that uses a Nearest-Neighbor-based Coordinate Descent method to solve Linear SVM with truncated loss. In particular, we propose a sequential relaxation that sol...
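The paper's indexed, nearest-neighbor-driven coordinate selection is not specified in this excerpt, so the sketch below shows only the standard dual coordinate descent core for the L1-loss linear SVM (in the style of Hsieh et al., 2008) that such a method would build on; C and the epoch count are illustrative.

```python
import numpy as np

def dual_cd_linear_svm(X, y, C=1.0, epochs=10, seed=0):
    """Dual coordinate descent for the L1-loss linear SVM: each step
    optimizes one dual variable alpha_i in closed form while maintaining
    w = sum_i alpha_i * y_i * x_i."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = np.sum(X * X, axis=1)          # diagonal of the Gram matrix
    for _ in range(epochs):
        for i in rng.permutation(n):
            G = y[i] * (w @ X[i]) - 1.0  # partial gradient in alpha_i
            a_new = np.clip(alpha[i] - G / Qii[i], 0.0, C)
            w += (a_new - alpha[i]) * y[i] * X[i]
            alpha[i] = a_new
    return w
```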


Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization

Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure for finding sparse solutions of underdetermined linear systems. This method has been shown to have strong theoretical guarantees and impressive numerical performance. In this paper, we generalize HTP from compressive sensing to a generic problem setup of sparsity-constrained convex optimization. The proposed algorithm ite...
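As a concrete instance of this family, here is a sketch of a gradient hard thresholding iteration specialized to f(x) = 0.5*||Ax - b||^2: a gradient step, truncation to the k largest entries, then a least-squares re-fit (debiasing) on the selected support. The step size and the debiasing step are standard choices for this quadratic case, not taken verbatim from the paper's generic setting.

```python
import numpy as np

def grahtp_least_squares(A, b, k, step=None, iters=50):
    """Gradient hard thresholding pursuit for f(x) = 0.5*||Ax - b||^2:
    gradient step, keep the k largest entries, then debias by least
    squares restricted to that support."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L for this quadratic
    x = np.zeros(n)
    for _ in range(iters):
        v = x - step * (A.T @ (A @ x - b))           # gradient step
        S = np.argsort(np.abs(v))[-k:]               # top-k support
        x = np.zeros(n)
        x[S] = np.linalg.lstsq(A[:, S], b, rcond=None)[0]  # debias on S
    return x
```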



Journal:

Volume:   Issue:

Pages:  -

Publication date: 2015